brian carroll on Tue, 11 Sep 2012 12:47:23 +0200 (CEST)
Re: <nettime> subjective math.
Michael H Goldhaber wrote:
How does your approach relate to or differ from Lotfi Zadeh's "fuzzy logic?"
Hello Michael, thanks for your interesting question.
I had not heard of Lotfi Zadeh because my path into logic
was through individual explorations with the alphabet as
a phase-changing system, experimenting with letters and
numbers, their reflections and rotations of shared structure.
A university math teacher verified this to be a form of calculus and recommended a course on 'probability', which closely relates to set theory and Venn diagrams in what may be considered a 'weighted analysis' of sorts.
This is the form of mathematics that should be taught early and throughout all education, because it is of practical value for basic reasoning, allowing robust evaluation and understanding of the ramifications involved.
It allows someone to reason that something is 'probable' in a way that tends towards absolute truth in reasoning, and this is different from saying it is likely or possible.
So there is deep empirical grounding it can reference
if ideas are mediated in terms of their truth and logic.
Going into this course, I think the concept of 'superposition' already existed for me in terms of the alphanumeric sign (HIOX), which is replicated in a 16-segment LED display, essentially a union-jack symbol that generates all western letters and numbers. Further, for numbers, the 7-segment display, an LED component used in electric clock radios and equipment readouts, is a simpler example of the superposition concept... (note: I did not know the physics word 'superposition' yet had some sense of the concept because of this...)
The 7-segment LED looks like a number '8' that is more rectilinear and box-like. Within this symbol, all of the numbers - 0 1 2 3 4 5 6 7 8 9 - can be regenerated. So in a sense they are suspended within this '8', which is not actually an '8' when it is not lit up, only an outline or matrix for a potential 8 or 1 or 3, etc. So there is a potential number, 0 through 9, that can be displayed.
Also, the hexadecimal letters (A B C D E F) can be generated from the symbol, so there is an alphanumeric aspect.
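
(To make this concrete: a minimal sketch in Python, where each digit is a subset of the seven segments a-g; the segment labels follow the common 7-segment convention, though the variable names are mine.)

    # the unlit 7-segment display: all seven segments as pure potential
    FULL = set("abcdefg")

    # standard 7-segment encodings: each digit a subset of the '8'
    DIGITS = {
        "0": set("abcdef"), "1": set("bc"),    "2": set("abdeg"),
        "3": set("abcdg"),  "4": set("bcfg"),  "5": set("acdfg"),
        "6": set("acdefg"), "7": set("abc"),   "8": set("abcdefg"),
        "9": set("abcdfg"),
    }

    # every digit is latent within the single unlit symbol
    assert all(seg <= FULL for seg in DIGITS.values())
    print("digits suspended in the one symbol:", sorted(DIGITS))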
In this way, without calling it 'superposition', this same "potentiality" was noticed in these cultural symbols. The 16-segment or HIOX symbol (called that because it equates to overlaying those letters) is often seen in building details from ancient times, yet also in federal buildings worldwide, in addition to its use in electronics as an alphanumeric display component.
For the 16-segment, the entire alphabet and number
system can be recreated from a single symbol, so in
this way its potentiality is 26 letters and 10 numbers
though it can go beyond that given extended signs.
Investigating the structural relations between letters and numbers, especially after reading The Republic by Plato, where this activity was directly referenced, became a major question and cultural enigma: why is this not being discussed, why is there no record of such an amazing ordering device, if not 'parti' (which a professor described as 'organizational logic'), especially given its code-like or mastercode-like attributes in civilization. Plato had described part of it in Meno, yet only the part of the symbol related to its geometry.
I did not know what to call any of this until taking the course in probability, and then it became possible to begin conceptualizing the condition it exists within in terms of exponential counting, or relationships, if it can accurately be described this way. The enigma of superposition can be presented as a riddle, and this is easily demonstrated by these generative symbols.
For example, a 7-segment display could be used to animate a sequence of the letter E and the number 3 in a spinning condition, around a central axis, which would then appear as a number '8' (if not a letter B) on the electronic display - or even via a physical motor with a single alphanumeric shape, a number 3 on one side and a letter E on the other, set spinning.
If observing this spinning shape, knowing it may be either a letter E or a number 3, yet because of its motion it is indistinguishable and merged into one entity, essentially -both- letter -and- number, then its potentiality may be hard to determine precisely at a given observation: is it a number 3 or a letter E? What if it is somewhere between - what would that mean, in that a gradient could potentially exist?
This is essentially the limit of binary reasoning,
a boundary condition for observation. Depending
on how fast it is spinning, the blurring that occurs
could either constitute its own entity, such that it
is only possible to see in terms of its being '8',
which is outside the question (3|E), because it
is precisely not just one or the other choice. So
the transformative condition in which the objects
exist cannot be evaluated in the binary terms, as
if they are still simply distinguishable as a 3 or E.
Or if the spinning is slowed down, depending on whether it is an electronic display or a physical model, it may be possible to say that it is a letter at one moment in its rotation, or that it is identifiably a number. Though this situation can vary in the extremes, given how it is set up as an experiment to evaluate observation.
If the 7-segment simply blinked an 'E' or a '3', it would be possible to determine in binary [yes/no] terms whether it was either this or that, if the display were switching back and forth slowly enough. Yet at some threshold it becomes blurred, imperceptible, in that the distinct number or letter is lost to their fusion.
With an analog model - a motor rotating a vertical axis with a sign that is 'E' on one side and '3' on the other - speeding this up into a blur fuses the two into one. If very fast, it would be difficult if not impossible to distinguish which is which by naked-eye observation. It could be both 'E' and '3' at the same time because it is moving so quickly.
If this physical model were slowed down, an observer positioned at some location around its 360-degree rotation could potentially, standing still, watch as the number 3 becomes the letter E, going from an exact reference, where 3=3 or E=E, momentarily, to a partial match, where it is mostly 3 or mostly E. In this way, as the analog shape (a cutout form spinning on its axis, with a letter on one side and a number on the other, if not each side a different color) turns, it is then in a middle-state, or 3-value condition, somewhere in between the exact binary categories - in that a flattened shape exists as it moves toward the central axis, like a capital 'i' or 1, given its depth.
3 ==> | ==> E ==> | ==> 3
(example of 2D/3D shape spinning on vertical axis)
Another way to convey this is that as the shape turns, its horizontal 'arms' become shortened, thus warped, and the shape itself warps until it is flattened out as it spins on its axis. So there is a shortening of these arms as the 3 turns sideways, where it appears only as: | and likewise with the E as it turns and flattens out.
This is equivalent to opening up a graphics editor, typing out a letter or number in a font, and then choosing to modify its width, from 100% to say 50%, then 30%, then 1%, at which point it is: |
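
(A minimal sketch of that flattening, assuming simple orthographic projection of a flat sign spinning on a vertical axis; the sampled angles are illustrative.)

    import math

    # apparent width of a flat sign rotated by theta about a vertical
    # axis: |cos(theta)| of its full width, collapsing to '|' at 90 deg
    for deg in (0, 30, 60, 80, 89, 90):
        apparent = abs(math.cos(math.radians(deg)))
        print(f"{deg:3d} deg -> {apparent:6.1%} of full width")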
And what this is to question is: at what point does the E stop being an E as an exact match, E = E? And likewise for the number three. At what point is it only a partial-E, not an exact match to what is known to exist (the shape seen directly), versus at an oblique angle while it is moving and changing? And thus, is a 99% E still an E, or is it crossing some boundary condition, where it is largely the same yet taking on more and more paradoxical qualities as it moves into a less clearly distinguishable realm? And so how is it determined when something stops being what it is, based on observational conditions, on how it is viewed?
I did not have an answer to this; it was a question that for me was quintessentially of paradox. And yet I knew of this potentiality of these symbols, that their basic character involved this dimensionality, and yet there were no words that could describe it - only later did I hear of the concept of superposition.
The thing that opened my mind during the probability course was the classic coin-toss example. Oddly, this was contemplated before it was mentioned in this thread, yet it is also prescient because, like a random-event-generator pulling 'forms' out of the noise of the shared atmosphere, there is some potentiality to thinking as it exists in the world, especially in the overlapping realms of empirical reasoning, intuition, the psychological if not psychic, that directly relates to the issues of probability in its statistical grounding of information as pattern. And so there are some wonderful 'coincidences' in the parallelism of shared consideration, and it seems this could be implicit yet often unsaid: such modeling exists in our distributed evaluations, yet they may never be linked or conveyed in real-time.
To the coin-toss: it is, seemingly, a classic 50:50 scenario. The following URL has a helpful interactive explanation and reminds me of an important lesson from probability: that it mathematically involves the area between 1 and 0, where 1 is associated with truth and 0 is associated with falsity.
Probability - theory of tossing coins
http://gwydir.demon.co.uk/jo/probability/info.htm
And so calculations often exist as 'towards 1', such as .99, which is still not absolute truth or 1 yet is very nearly so, and in this way it would be easy to say it is 'highly probable' if it were regarding an evaluation in terms of its weighting; .30 would be closer to falsity, and thus 'improbable' in terms of tending toward truth. This is relevant if 'reasoning' were occurring at .10 in terms of its empirical evaluation, or .05 out of 1, which is what the previous claim of subjective math involves. Whereas grounded modeling could tend toward .99 via a rigorous methodology going beyond this languaging.
In any case, an epiphany occurred in class, because this classic coin-toss is the ultimate binary assumption: that a coin lands on either 'heads' or 'tails' -- even the above explanation makes it seem this simple -- and yet, because of probability, this is astoundingly not correct. In other words, this classic 2-value situation actually exists in 3 values, and under certain highly improbable conditions the coin could land on its edge (!) -- meaning: heads/edge/tails. It is a minuscule change, born of mathematical analysis, and yet if tossing a coin enough it could feasibly occur. In a 2-value analysis, such an event would be counted as an error and ignored; a new toss would substitute for the anomalous event, because it is outside the criteria used for evaluating heads or tails. Whereas 'edge' is both, or neither, heads and tails. 3-value.
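
(A sketch of this 3-value toss as a simulation; the edge probability used, 1 in 10,000, is an illustrative figure of the order discussed below, not a measured value.)

    import random

    P_EDGE = 1 / 10_000          # assumed, illustrative edge odds
    counts = {"heads": 0, "edge": 0, "tails": 0}

    random.seed(0)
    for _ in range(1_000_000):
        r = random.random()
        if r < P_EDGE:
            counts["edge"] += 1   # the anomaly a 2-value analysis discards
        elif r < P_EDGE + (1 - P_EDGE) / 2:
            counts["heads"] += 1
        else:
            counts["tails"] += 1

    print(counts)  # heads and tails each just under 50%; edge ~0.01%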
What is amazing about this 'potentiality' is that a coin-toss is not a 50:50 situation; it is more like 49.999% and 49.999%, likely with more nines on each, yet somewhere in that approximation is hidden this other event, this other possibility or potential that challenges the model and yet can remain otherwise unaccounted for due to the prevailing binary assumption. Thus an issue of ambiguity in what could be the most clear-cut example of binary processing and decision-making: flipping an object onto one side or another and determining which side it lands on or is observed. And this sidedness can become ambiguous, or its boundary condition can at times fail to describe the event that actually occurs.
Whoever has played football (soccer here in the U.S.) may have been at a coin toss to determine the kickoff, which team starts play. In those conditions, depending on the length of the grass, perhaps its variety, even its density, a coin toss can at times itself be inconclusive -- the coin can land close to vertically within the grass, and thus a new toss is required to determine which team has the ball. In other words, in some cases it is not entirely vertical (I) yet may be only 5% heads and thus inconclusive, and this again is that question of gradient, of gray-area.
It is also possible that within a given environment, say a realm with a highly magnetic generated force-field, a tossed coin could be prevented from falling onto its side, such that it remains vertical given its peculiar context - and thus indeterminate if viewed as heads or tails.
So in the classroom, an empirical statistical anomaly that could actually be encountered in everyday situations - the binary coin toss that is not always binary - is an example of another realm currently unaccounted for. While this 'potential' for a third value exists in binary considerations, it is not allowed for, or is disregarded: these important anomalies, where experience and knowledge reside, are realms of observation invalidated by binary reasoning because they do not fit into the simple black-or-white worldview, true or false, on/off.
So in that situation: what may be 1-in-10,000 odds, if not far lower, of a coin landing on its edge within certain conditions is compared against the probability that it lands on one face or the other, which is nearly certain (nearly 100%), or .9999999 and so on. (Percentages and probabilities are different realms, though .9999 as it relates to truth or 1 can also be translated into 99.99% as related to 100%.)
It is about proportion. It is almost impossible, or highly improbable, this 3-value scenario in the coin-toss, yet in real life it is often encountered. And thus, mathematically, this binary evaluation (the evaluation of 'truth' and 'falsity') is nearly certain in its capacity to model such an either-or event, such that the statistical anomaly is disregarded in explanations of the coin toss. Yet it is potential.
This potentiality, this hidden quality that can be expressed under certain conditions, was what opened up the question of 3-value consideration for me, in that it allowed a way of understanding and evaluating paradox in its 'inbetweenness.' If it is not heads or tails, what is it? Well, it can be this other value, this third option. And thus, for instance, the question of the 7-segment display with an E or 3 in a blurred, merged unification could become this third value. And further, if it is modeled in analog it could be even more so, as the gray-area could expand to partial-states, whereby N-value observations or finer sampling of the figure could chart its movement across the realm from 0 to 1 and 1 to 0 via continual rotation: moving from 3 into not-3, else 3, partial-3, not-3, for instance, to even .3 or .1 or .99 evaluations.
So what do you call something that is potentially many things yet not distinguishable as those things? There is some probability it exists in the structure, yet it may not be on the surface for observation given existing conditions. Say it is an embedded ordering, or a latent part of its structural potential, not yet activated.
The 7-segment and 16-segment LED displays in this way relate to probability insofar as they can potentially display all numbers and the hexadecimal letters, for the first, and all numbers and letters, for the second, and yet any given letter or number may not be visible within a given observation or particular conditions. And yet if either of these symbols were itself lit up, what is the possibility it could be a given letter or number or some combination thereof? For instance, if a 7-segment display is lit up as the number '8', its iconic form, each of the letters b d p q could be within that particular shape. If there were only 2 letters, what is the probability of any given pair, combined? They are not all equal: b and d together would not add up graphically to an 8, nor would the letters p and q. Yet b and q could, or else d and p could. So the odds of various contexts and situations can shift according to the way things are structured and how they relate, according to number but also beyond this, to include pattern-matching in certain scenarios. It is in this way that logic can become an issue of visual reasoning, where it is about modeling ideas and shared structuring, where it enters a realm of puzzlework, of ideas: fitting together, rearranging, identifying patterns.
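
(That pairing claim can be checked directly; a sketch assuming the usual lowercase 7-segment renderings of b, d, p, q, with segment labels a-g as in the earlier sketch.)

    from itertools import combinations

    FULL = set("abcdefg")
    LETTERS = {
        "b": set("cdefg"),  # lowercase forms on a 7-segment display
        "d": set("bcdeg"),
        "p": set("abefg"),
        "q": set("abcfg"),
    }

    for x, y in combinations(LETTERS, 2):
        whole = (LETTERS[x] | LETTERS[y]) == FULL
        print(f"{x}+{y}:", "adds up to 8" if whole else "does not")

(Only the mirrored pairs b+d and p+q fail, each missing the top or bottom segment; the other pairings complete the 8.)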
It is difficult to know if this is conveying the basic idea, given the abstraction necessary to share a real-world example that could be locally tested. What it attempts to provide is a basis for how 3-value observations are a regular occurrence yet remain to this day mediated in binary 2-value terms, thus forcing both inaccurate observations and ungrounded 'absolutist' frameworks for reasoning, whereby [absolute truth] is assumed the default for basic exchange when it is closer to .0000001, etc.
The discovery for me was that what was modeled in probability is actually inverted in real-world practice, in that experience tends towards 3-value, not 2-value, and so the latter is improbable versus its assumed authority for decision-making and for determining what will be true by ignoring anomaly. In other words, the probability that absolute truth is being mediated without error tends towards absolute falsity (0) rather than being highly probable, given that things may not function in A=A dynamics yet may be forced into that framework by default.
At some point - I think when there was a levitating magnet on dry ice at a campus exhibit, or in reading about molecular structures - talk of superposition may have been encountered. And in the way it offers a conceptual description, as something could be 1/3rd or 2/3rds of something, it begins to offer a way to describe how partial states can exist in between the end parameters a situation spans, in the gray area between 'sides'.
3=3 partial-3 | partial-E E=E
That realm between 3=3 and E=E is the gray-area, and this is a realm that can be ambiguous and may not even be known as 'partial 3' or 'partial E'; it may instead be 'unknown' or 'unidentifiable' in contrast. It could be not-3 and not-E yet function within the parameters outlined, where a question of 3 or E may instead be evaluated in terms of 'other'. If the actual substance of this is under review, it is further proposed that 3=3 or E=E may not be real-world conditions, and that it may only ever be A = ~A, or that partial evaluations exist due to the inaccessibility of absolute frameworks, beyond mathematical idealism.
While a basic experiment such as an animated LED display could involve matching symbols, once this gets into language and ideas it more often than not, and probably always, involves a realm such as:

[A] [~A] <-------- | --------> [~B] [B]

Where A=A equates with 1 or truth, and B=B likewise, these absolute yet contingent, such that observations normally occur within the range of [~A to ~B], which is the grey-area of 3-value, n-value logic.
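
(One standard formalization of this grey-area is Kleene's strong three-valued logic; a minimal sketch, using Python's None as the unknown middle value.)

    def k_and(a, b):
        # Kleene conjunction: False dominates, unknown propagates
        if a is False or b is False:
            return False
        if a is None or b is None:
            return None
        return True

    def k_not(a):
        # negating the unknown leaves it unknown
        return None if a is None else not a

    observed_is_three = None          # the blurred 3|E, unresolved
    print(k_and(observed_is_three, True))   # -> None, still unknown
    print(k_not(observed_is_three))         # -> None, negation is no help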
What this is to say is that scientific observation, most mathematics, and likely all language exist in the partial realm of ~A to ~B, or 'on the edge of the coin toss', in that truth is mixed with bias, distortion, warping, error, and thus is 'partially true' to a greater or lesser degree. That impurity keeps it beyond the purity of 1, yet if left uncorrected it moves towards zero instead, through exchange upon exchange upon exchange in this partial condition. So in certain circumstances it may not even be possible to mediate an either-or framework or use binary decision-making, because the absolute framework does not realistically exist as an observational condition if anomalies exist, such that it is only partially true, not wholly true. In this way the realms of A and B are removed as 'ordinary' conditions, as this grounded empiricality. It is a presumption detached from the reality, including scientific observation on its own terms, without accounting for truth outside its biasing.
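
(The 'exchange upon exchange' drift toward zero is simple arithmetic: if each uncorrected relay preserves only a fraction p of the truth, then n relays preserve p**n, which tends toward 0 for any p below 1; p = .99 here is an illustrative weighting.)

    p = 0.99   # assumed fraction of truth preserved per exchange
    for n in (1, 10, 100, 500):
        print(f"after {n:3d} exchanges: {p**n:.4f} of the truth remains")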
In other words, what is modeled by probability relates to how something exists and is evaluated. If a coin is assumed to have only 2 sides and no depth, the edge is only a fiction - and yet it is not: if the coin has an edge, then the edge is somehow part of the modeling. Likewise, if a concept or idea has N dimensions and only, say, 10 of them are referenced, this is only a partial evaluation of its potential as an [idea], and this is how superposition can exist in terms of the empirical modeling of things, their relativisms each a probability to some degree or another, as facets of the larger whole. Yet like the classic story of those touching parts of an elephant, it is not an elephant until brought together whole - and in this way, so too, ideas, concepts, words, meaning.
That is why the grounding of circuitry is required for accurate modeling that removes error, which in itself statistically tends towards its own falsity, whereas today this is relied upon as structure. Someone can speak of the foot, all in itself, as the whole, and either disregard someone else speaking of the tusk, or they likewise the other, and the observations never add up, empirically. It is turned inside out: a universe of elephant foot, elephant ear, elephant tusk, yet no elephant here. The elephant in the room is invisible because of this.
The issue of fuzziness, then, is in terms of ambiguity in the potential states of a given observation and its criteria, its range of questioning. The term was heard at some point online and was intuitively already understood, because of this reasoning process. I have no formal mathematical relation to it beyond the 3-value discovery within the coin-toss example, which essentially opens up the larger realm of the n-value, which fuzziness tends to describe, in that it has a larger range of potential ambiguities.
The wonderful aspect of reading about Lotfi Zadeh is that his discovery and mathematical development were seen as related to his overlapping heritage, in that common everyday experience of how questions exist and how they are mediated - and then, with his skills, translating this observation via mathematical reasoning, which I cannot read, so I cannot truly relate to the work. It is an abstraction modeling the characteristic in itself, it would seem, whereas by comparison it could be an issue of mediating a superposition of [countries] or of [heritage] and remain valid in terms of 'fuzzy logic' as this relates to 3-value and, more likely, n-value views.
e.g. X = country ~X = { .IR .RU .US .AZ }
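
(Read as a Zadeh-style fuzzy set, that line assigns each country a degree of membership rather than a yes/no; the degrees below are purely illustrative placeholders, not biographical claims.)

    # fuzzy membership: degrees in [0, 1], not mutually exclusive
    heritage = {".IR": 0.6, ".AZ": 0.6, ".RU": 0.3, ".US": 0.8}

    crisp = max(heritage, key=heritage.get)  # binary logic forces one answer
    print("binary verdict:", crisp)
    print("fuzzy view    :", dict(sorted(heritage.items())))

(Unlike probabilities, the degrees need not sum to 1; each is an independent degree of belonging.)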
In terms of grounded reasoning, everyone has the capacity to be a logician, to evaluate 'logic' in these everyday terms; it is only an issue of literacy, of providing a way of communicating what is already known and already mediated yet forced into the false perspective of binarism. And so a person does not need to be a mathematician, or read complex notation or other calculative, equation-based views, to get at the essence of things, their questions and accurate appraisals, and to analyze and develop models and hypotheses for testing and evaluation. A=A and A=~A, 1 and 0, and probabilities weighted between truth and falsity (.99 <-> .01) would allow a majority of empirical reasoning to take place and be mediated in debate and within shared observation, accounting for truth and error.
Fuzzy logic and fuzzy math and fuzzy sets have been related to ambiguous data, especially in robotics and machine-vision, where a robot will evaluate a situation in terms of pattern-matching. It may have only partial matches to what it already knows, and thus, like a puzzle, it seeks to identify what matches and what does not; something may be A = A and recognized, while something else may be A = ~A, only a partial view of something larger, and may remain a question. And so it is contingent, a potential, a possibility. Is the robot in one hallway or another? Maybe it needs to turn around and add up another view to get more information, and then it might know. Or maybe it needs to start mapping beyond its known boundary and thus extend off of the data sets already established, building on their structure, as hallways connect or light fixtures change or obstacles are repeatedly encountered and thus become established reference points.
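
(A minimal sketch of such partial matching, with all names and templates hypothetical: the view is scored against known templates by the fraction of features shared, and anything inconclusive means 'gather another view'.)

    def match_score(view, template):
        # fraction of features shared: 1.0 is A=A, less is A=~A
        return len(view & template) / len(view | template)

    HALLWAYS = {
        "north": {"door", "light", "stairs"},
        "south": {"door", "light", "plant"},
    }

    view = {"door", "light"}   # an ambiguous partial observation
    scores = {name: match_score(view, t) for name, t in HALLWAYS.items()}
    print(scores)   # both 0.67: inconclusive, so turn around, look again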
A puzzle then, a question: a person or robot in a condition, partly knowing and partly not, only in some cases mediating A = A, while in many others functioning beyond this, in partial realms where ~A predominates in the relationship between observer and observed. Now what if our robot, ourselves, were to assume that the operating model for our machine vision is A=A when it is actually not, such that reasoned viewpoints are ungrounded, not mapping the territory that exists, only virtual if not unreal? And what would happen if the robot suddenly were to shift to ~A = ~A modeling, which in its resolution can tend towards a grounded A=A condition, yet only after connecting every perspective, accurately evaluating every angle of every potential observer, and then mediating the shared condition this way?
An example of the ambiguity of observation is provided by a prevailing question of 1872: whether a galloping horse always has one foot on the ground, or whether all of its feet are off the ground at the same time. This could be a question of belief: some may say 'yes', there is a foot on the ground at all times; others may say 'no', there is not. Yet because it was not possible to actually tell, since the movement was so quick, then like a spinning object in superposition, a potential existed that either event was a possibility. And thus it could be evaluated as a 3-value situation, where the observation exists in 'the unknown', between these two choices. And so when Muybridge the photographer made his animal locomotion photographs, he resolved this scenario by enabling a finer sampling rate, by allowing the human eye to catch up with the pace of the horse, and so to see beyond its boundary and verify that all feet are off the ground and the horse is in flight during its gallop. What could not be determined, and was in a realm of ambiguity, was through further and increased examination then determined to be on a given side. Yet without that extra involvement of inquiry, it could remain in a realm of the unknown: a possibility or potential, yet inconclusive, not yet known. Though it could be believed, assumed.
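
(The Muybridge move as a sketch: the same rotation, sampled coarsely, collapses into an ambiguous set; sampled finely, every state of the cycle is resolved. The rotation model here is a toy assumption.)

    def face(deg):
        # toy model of the spinning sign: what shows at a given angle
        deg %= 360
        if deg < 10 or deg > 350:
            return "3"
        if 170 < deg < 190:
            return "E"
        return "|"   # edge-on or in between

    def sample(step):
        return {face(t) for t in range(0, 360, step)}

    print("coarse:", sample(120))   # misses the E entirely
    print("fine  :", sample(5))     # recovers 3, E, and the between-state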
Most language exists in this [ambiguity], especially those [concepts] referenced in their [archetypal] permanence, as if they are already known by being referenced, when instead these are scenarios not of [on/off] in terms of their truth; they are more [galloping horses] in terms of the [unknowns] mixed in with the knowns that remain undifferentiated in language.
Logic and empirical grounding of the
observers, like Muybridge's camera to
the question of the gallop, allows this
finer resolution of sampling to occur,
by combining all views of the event,
then to determine its legitimacy in its
wholeness versus in its [partiality].
In that a person could witness one hoof
off the ground and assume all hoofs,
and another one hoof on the ground
and assume all hoofs on the ground.
And so concepts, ideas, reasoning
ad absurdum, in terms of exchange.
Reading and writing today could be equivalent to pattern matching, and given the means of logic, a given view may be more or less accurate depending on whether it is grounded in truth or within a virtuality that tends towards an absolute falsity.
The idea is a circuit that grounds, through its logic, back into the truth that structures and sustains it. If the idea is faulty, or its structuring is, it collapses upon inspection; it cannot withstand the forces of evaluation due to bias, warping, distortion, other views. The empirical is made and meant to withstand this, because it is based within truth, robust in that truth is its foundation, its structure, with logic establishing and supporting this. And here to convey that it is, and can be, 3-value logic that tends towards the absolute truth of the binary, of highest-probability observations, such that the improbable and-or inaccurate would not be allowed alongside unless its truth were proven and removed of error - otherwise it would contaminate "the entire map".
The threshold for logical reasoning, for
exchange, is truth. It is the requirement.
Not just affect or opinion or assumption.
And this tends towards debate, contention,
argument, hypothesis and shared modeling.
The challenge of ideas versus their neutering.
The beauty of thought versus its incapacitation.
In other words, these words are written to try to convey something, yet more and more words must be written to clarify what already cannot be said by their use. It is as if writing for clarification and yet never achieving it, only approximating it - as if it is never within this framework even, yet attempting to be accessed within such language. To write new ideas takes more words, many more, because there is no pre-established view to access. So it is tremendously inefficient and time-consuming to convey original thinking versus what is already answered and can be assumed as a shared POV. In this way, communication tends to the ideologic and, as language by default, to the ungrounded. In this way truth is outcast from conversation and relativized in gated communities of binary opinion. These are the dynamics of lesser truths > greater truth that biasing, limits, boundaries, and warping allow.
So what if logic is actually something ordinary, not removed from common experience - natural, part of inherent processing abilities - and it is more an issue of reconfiguring it so that it is more accurate and aligned with truth, rather than some esoteric expert practice requiring new degrees to achieve, versus common sense? Perhaps it is like puzzlework, each of us with our own puzzles to mediate - questions, perspectives, and observations, given dimensionality: puzzle-logic.
What if [concept1] and [concept2] are viewed in
this framework of missing pieces and alignments
and order and structure and relationship between
various states and conditions. What if [this] and
[that] are mediated both by a given individual and
also by a group, and some have some pieces and
others have others, and that only together will the
combined [this-and-that] be achieved in its gradient
resolution, as the N-dimensional hoofs or the other
details are brought together in a shared framework
or coherent structure, to error-check & error-correct,
and in this way to more accurately model what exists.
The difference between a 2-value and a 3-value paragraph could be that a biased 2-value person may mediate it in one-sided terms, such that it is viewed as 100% true in language, due to belief removed of its validation in actual truth. It can be viewed as 'perfect' because some truth exists in it, is carried by its scaffolding, and perhaps this is its potential: to convey truth via this conflicted medium. And yet a 3-value evaluation could look at the same paragraph and see it in more ambiguous terms, of partial truth amidst partial falsity, and could only determine what is true by removing what is not. And in doing so, the sentences may fall away, many interlinking words, until only some aspect of some partial view of a larger conceptualization existing beyond this paragraph were identified as part of its validity, its extended connection to empirical reasoning that validates its truth, via others' observations and the truth of the world.
In this way, whatever is true could be seen at the higher rate of sampling required for its truth, however finite and minuscule a point it may be in another context, versus having it forced into an approximation where this truth is equated with all that is unrelated and unnecessary to it, in its purity and isolation as a concept beyond this instance. In this way, this sentence and this paragraph, in their nearly absolute imperfection, in this difficulty of language that torments both reader and writer, try to convey and yet continually fail, due to this limitation, this threshold, this boundary condition, where what is observed cannot be seen or talked about in this medium because it is so low-resolution by comparison; everything is a blur, this the default. Only [details], only [partial views]: hoof after hoof after hoof, detail after detail, observation after unconnected, uncorrelated observation, etc. Most essentially, [relativism] upon [relativism]. The binary view requires this condition to be simplified and evaluated in an approximation, as if this tends towards perfection, in its ideality.
To do this involves ignoring the ambiguity, so that a typo can invalidate a claim based on this superficiality of language upon its surface, as if a shiny thing to be bought and sold. The 3-value view recognizes inherent error - in this way, fuzziness by default of ambiguity - meaning that the absolute framework of true or false is not the starting point; it is the end-point of every effort combined, and then still only contingent upon its truth given evidence.
In this way, meaning moves from ungrounded belief, which requires power for its authority and rule, to logical reasoning based in grounded truth - two very different kinds of enlightenment, one even standing against the accounting of truth, while the other does not exist without its integrity. Then reasoning and exchange, issues like [public] and [private], communicated in terms of how these might be modeled differently given one-sided biasing versus its neutralization. Anything and everything, in this same way.
Fuzzy logic, then, does function as 3-value, in that it operates in this middle realm, and to the degree of its sampling it can move further into and operate in N-value conditions: zones where there may be more unknowns than knowns, requiring logical structures based on the existing puzzle pieces to mediate what is unknown via extension and evaluation of modeling, along with new hypotheses and consideration. Perhaps something relied upon is already in error, and thus contingency, and a changing and refining of the existing approach.
The condition that exists in observation, and is most evident in language, appears to be that what is observed as an event exists in [superposition]. And if this is forced into a biased evaluation, it is easy to say 'heads' or 'tails' and remove the ambiguity and just assume truth or falsity, based on a pragmatism relevant to a limited viewpoint, where either it is useful or not - thus self-interested consideration, yet not necessarily aligned with the larger truth.
And what the question of fuzzy logic, or of 3-value or N-value observation, involves is that every concept would start within this consideration: instead of absolute truth being mediated in these [variables], what is in superposition is the question of their ~partialness, each and every concept only a partial view of [X]. And therefore, if only a partial view, each concept would function as ~concept, or [~X], unless empirically modeled and removed of all surrounding, supporting error and distortion. In this way, the context or contextualization of a variable cannot be removed from its surrounding influence.
It supports and provides the framework and the 'reasoning', as it were, all this extraneousness. In this way, a paragraph mentioning [the state] and [the economy] and various other attributes would instead be modeled as [~the state], in its partiality, and [~the economy], in its partiality, by default of the inherent ambiguousness of these variables: they are not the N-dimensional 'whole concept' separated from this same surrounding error in language, nor are they concepts removed of it themselves, such that what is said about the economy is simply 'true'. Thus, until that is what is referenced, it is not-true, only partially so, and likely only minutely given the context, if at all, as in the empty examples above.
A statement such as [X] [Y] [Z], then, while ideal for binary processing in yes/no terms, would instead by default be evaluated as [~X] [~Y] [~Z] in a fuzzy-logic framework of 3-value and n-value considerations, in that this is where ideas start, where communication exists - exchange, 'reasoning'. And thus, while ideas may be processed as [XYZ], they are in truth more accurately and realistically occurring in a realm of [~XYZ] as an observational condition. Thus the weighting of truth, 'tending toward truth' or 'toward falsity', via weighted analysis and evaluation, where probabilities have a fundamental role in the reasoning process, for the grounding of ideas.
The 'virtual state' relies upon XYZ viewpoints and their communication as if acceptable, tolerable, and this is the basis for power and authority today. That it cannot be challenged without retaliation, forced submission - this is its power, the oppression. Ungrounded relativism is required to sustain its POV: beliefs which function beyond their accounting in truth, such perspectives a basis for action; "uncertainty", etc. All of these things and viewpoints can be destroyed - discredited, proven false, completely obliterated as sustainable ideas within a logic-based evaluation. Entire worldviews can be dismantled immediately. And the thing about logic is that it is transparent. Check the code. Reason it. You have higher truth? Prove it. Show your cards. Stop the bullshitting. Stop playing games. Let's get down to 1's and 0's.
The big issue is that a [concept] actually exists in a state of superposition, a grey-area that goes unaccounted for, not properly observed or accurately evaluated, and these 'errors' allow for the given reality by ignoring the anomalies involved, as if a simple heads-tails coin toss. [X] as absolute truth is the default assumption, versus [~X] in a state of superposition, each and every idea only a partial viewpoint. In this way, 3-value logic assumes imperfection, and the question is where things start: not in absolute truth but in the gradient, on the edge of the coin as the normative condition - the inverse of the coin-toss in its probabilities. The probability that [X] is referenced by [~X.POV1] of N dimensions, say a billion, tends towards .00000001 of a truth, potentially - not its entirety. For every partial viewpoint would need to be combined and error-corrected to accurately represent the truth of X via shared observation. That is not the default condition, and yet for the binarist this is the place things are assumed to start: as if X=X simply by referencing the sign, even if it is only virtual, ungrounded from the reality.
X + (trust me) = Y becomes the situation. X + (historical reference) = Y assumes the same. There is a grey-area in [~historical reference] that never was resolved or removed of its error, and so in referencing it, it does not exist in its purity; it exists in this self-annihilating context which requires this ambiguity and, like a parasite, feeds off of this approximation of truth, this inaccuracy of interaction and observation, to entrap and confine it and make it impossible to go beyond this limiting framework - which suits binarists fine, because perhaps this is precisely its purpose.
Once the puzzle is pieced together, the view here collapses - as if truth were suspended within another medium, its skeletal aspect tied to its life, buried within this excessiveness of words, as if what is required is dusting away with the small gestures of hand brooms, excavations here and there. Yet what if all of it were removed at once, the sustained falsity - what might be made visible of concepts if they exist beyond this, no longer hidden within it? In other words, with ideas existing between truth and falsity, the approximation that occurs is to estimate and seek to determine, given the ambiguity and given other viewpoints, what may actually exist - this in terms of questions. Whereas a binarist may force only answers, via a deterministic and predetermined view based on beliefs, on carried-over assumptions. Thus a showdown, duels, challenges: this is the classic role of thinkers, to take on these situations, to reveal the greater truth involved. And so what if an ordinary citizen also has this capacity, to logically reason against the machine-view and its many true-believers?
A few years ago I wrote about panoptic logic, a particular perspective that allows for every view of a 360° situation to be seen, as observers encircle a common experience, each having their own observations that may be unique. It also involves paradox, such as the 7-segment rotating 3|E. If this blurring were stopped, and a single observer looked at the situation, they may see only one side of it - an E or a 3 or a partial condition, if not its edge. Whereas if every person who surrounds this event were to see it in their own way, their own part or partial view of it, then in their correlation and combination it would be possible to model the whole, as both a 3 and an E, and to resolve the spinning 8.
Without every observer, the partiality may never be overcome. Yet with high enough resolution of the actual condition or situation, it feasibly would be possible to account for every thing accountable, within whatever limits may exist, to gain the most accurate modeling from which to evaluate the event - instead of having it bound to one observer and their partial view, and-or then another. If all truth were accounted for, perhaps even in the blurred state it could be deduced to some degree that one side could be a 3 and another an E - for instance if, as with Muybridge, a technological extension were used at every point of observation, and thus even at speed, via these advanced tools, to allow for such dizzying, specific observations. Such that, likewise, each person could say: 30% E and 55% 3, around the circle, until the enigma is resolved. The issue could be that people are the ones who are spinning. The observer is constantly in motion in this erratic way, without stabilized observation - essentially without gyroscope or compass.
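
(A sketch of this panoptic combination, with illustrative observer reports: each partial view is pooled via the fuzzy union, the maximum, so no single vantage decides and both faces survive in the combined model.)

    observers = [
        {"3": 0.55, "E": 0.00},   # facing the 3 side
        {"3": 0.20, "E": 0.30},   # an oblique view
        {"3": 0.00, "E": 0.95},   # facing the E side
    ]

    # fuzzy union (Zadeh): combined membership is the maximum reported
    combined = {face: max(o.get(face, 0.0) for o in observers)
                for face in ("3", "E")}
    print(combined)   # {'3': 0.55, 'E': 0.95} - both present in the whole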
Maybe the world is not so confused or so entirely confusing, and instead it is ours - our confusion, our incoherence, as a whole. Maybe our "cameras", which can sample, do not have our maps lined up together, and thus we each get lost in our own hallways of the labyrinth, and yet can only access a puzzle piece here or there of another, instead of every known solution where truth resolves a condition collectively. What if each of our individual puzzles is actually a cosmic puzzle, each our own detail of it, yet without each other no such cosmos exists, no such map to reference as real? Maybe the code, saying too simply yes/no, is distorting vision, creating false views and exchange with apparitions instead.
Changing the code: what might distributed sensor networks relay as puzzle pieces if what is communicated were neutralized of bias, or if shared perspective were possible - the potential within each view as it may be related to others via its truth? In this, perhaps my broken camera and corrupted coding of events and their relay combine with another's, who is not so broken in these ways and has their unique vantage to share, and likewise others, to fill in various gaps of the puzzle. It is this navigation, this use of referents for their directionality, that requires the integrity of language. And thus it is proposed that it begins in this impossibility and ambiguity, imperfection, fuzziness, and that recognizing this allows realignment and also calibration of observation and of reasoning, via shared modeling, concepts.
And so of these - of all the concepts and all the language and all the questions of code, and technology, and culture - there is at their base an origin and foundation in logic. Truth is validated and established this way. It is what reasoning is essentially about: accessing, mediating, sharing, clarifying this. Today, in an imperfect world with chaotic and ungrounded connections with one another, 3-value logic is a tool to establish this truth and relations within it, via acknowledging this context of partiality it involves. And in this way it is entirely possible, if not probable, to address each and every issue that is of highest purpose through these means, yet it requires this more intense involvement with ideas to accomplish, where words and their meaning are not simply arbitrary when it suits and absolute when needed.
And thus it seems impossible not to relate the observer with what is observed - if the observer can see themselves in 3-value terms, their own imperfections, so as to relate outward and elsewhere this way. Some cannot, via whatever limitations, and so may not be able to transition to a different way of approaching things immediately. Yet others may find it very natural, just an issue of voicing something already known by experience, and thus may intuitively and inherently understand, or even provide additional clarification about what this all involves.
And so perhaps such people can mediate these questions from the start in terms of exchange, challenging 2-value views, yet in doing so not from a position of 'perfection' - instead from humility to truth and its observance - in that modeling a situation often goes beyond a given viewpoint or perspective, and requires neutrality for observation, error-correction, and contingency, in addition to the unknown. It's a different mindset than what exists. Because if truth prevails, we all prevail.
An ungrounded binarist would be terrified by the uncontrollable unknown; they want to stop it from being allowed, its recognition, because that is its reality, becoming [sign]. A 3-value or paradoxical thinker is at least operational within the ambiguity, unlike the insecure binarist, and can navigate chaos via hidden order otherwise unaccounted for. It is an intuitive realm, this typic potentiality. The working-model or hypothesis is key - these as concepts even, frameworks or whatever they may be called. Thus, the sharing of ideas, like tools to use in given situations, so as to work through or better observe and understand. The panoptics are already occurring, yet not yet aligned in a coherent way to enable a supra-structure of observation beyond the finite.
The biggest challenge for the personal shift is achieving grounding as an observer, so as to be able to function in diverse conditions, to retain balance and clear thinking despite chaos or biasing or power-based influences. Logical reasoning is this capacity - for sanity, for good decision-making, for neutralization and error-correction. Self as feedback circuit. These types of diagnostics arrive through a 3-value or N-value framework, not the binary, because there they are avoidable via one-sidedness.
So in this sense, in this individual question of capacity, it is of the superposition of the self in its potential to function at its highest level, such that the [~individual] is in a situation, and to reach their more true self requires accurately modeling and governing the self, such that by moving in the optimal way and in the best direction, making the correct and right decisions, better versus worse, it then tends towards an improved version, where this motivation is toward the [individual] in their purified state, as the working-goal. And thus as [~people] relate with [~people] more and more through 3-value connections, this becomes [people] relating with ourselves, by figuring out the puzzle, the interconnectivity, grounding it, establishing shared working-models and purpose - and this begins within the grey-area.
The logic it involves is arguably common sense. It just may take a while to figure out how it works, or, as language, as method, how to communicate within the range it provides - more effectively, or within shifted rulesets, if not with new tools beyond what today is allowed for communication and exchange.
It is not a goal to write at length about these things, yet it is required, to share the ideas they involve, in the hope of clarifying the truth proposed to exist if evaluating this context in terms other than binary. There is overlap in this attempt at explaining the aspects involved, likely creating more confusion in an already confusing situation to evaluate. It is inherent; it is a limit of a finite observation by a flawed observer with a high error rate, where even the signal deteriorates before it is sent. I appreciate the chance to share these ideas, because they have been like discoveries, and perhaps others may find them of some use.
And in this way, it would seem that there is a realm of practical logic that is not that of the expert system of mathematical notation and its abstract language, yet can have a similar effect for human reasoning and processing as fuzzy sets have had for computers and robots. And it is seemingly this 3-value and N-value ability that opens up reasoning to the way the world may more actually exist in its modeling - the questions of the N-dimensional realm that exist beyond the ordinary boundaries of today, of ruling and banal opinions, where the median condition becomes the height of everything as it is mediocritized. Language serves this well. It is a great leveler for status-quo consensus. Like always picking the fruit low to the ground, the atmosphere of ideas is made inaccessible.
Brian Carroll
# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: http://mx.kein.org/mailman/listinfo/nettime-l
# archive: http://www.nettime.org contact: nettime@kein.org